A.2 Derivations for Section 3.1

We begin with a formal derivation of the formulas in Section 3.1. Recall that we consider a function $F(\theta)$ whose parameters can be split into $n$ SI groups: $\theta = (\theta_1, \ldots, \theta_n)$. We solve optimization problem (1) with projected gradient descent (2).

Remark 2. At first glance, the above formulation lacks the third (divergent) regime. If, conversely, $\eta > \frac{1}{\sum_{i=1}^{n} \alpha_i}$, then at each iteration at least one of the individual ELRs exceeds its convergence threshold: $\eta_i > \frac{1}{\alpha_i}$.
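As a purely illustrative companion to this setup, the sketch below runs projected gradient descent on the unit sphere for a toy objective that is scale-invariant in each of $n$ groups, then reports the per-group effective learning rates. The toy objective, the group shapes, and the ELR definition $\eta_i = \eta / \lVert\theta_i\rVert^2$ are our assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch only: projected gradient descent over n SI groups,
# tracking per-group effective learning rates (ELRs) eta_i = eta / ||theta_i||^2.
# The toy objective and the ELR definition are our assumptions, standing in
# for problem (1) and update (2) in the text.

n, d = 3, 10                          # n SI groups, d parameters per group
targets = rng.normal(size=(n, d))

def grad_F(theta):
    # F(theta) = sum_i ||theta_i/||theta_i|| - t_i||^2 depends only on each
    # group's direction, so it is scale-invariant in every group.
    norms = np.linalg.norm(theta, axis=1, keepdims=True)
    u = theta / norms
    g_u = 2.0 * (u - targets)
    radial = (g_u * u).sum(axis=1, keepdims=True) * u
    return (g_u - radial) / norms     # chain rule through the normalization

eta = 0.1
theta = rng.normal(size=(n, d))
theta /= np.linalg.norm(theta)        # start on the unit sphere

for _ in range(200):
    theta = theta - eta * grad_F(theta)   # gradient step, cf. (2)
    theta /= np.linalg.norm(theta)        # project back onto the sphere

elrs = eta / np.linalg.norm(theta, axis=1) ** 2
print("per-group ELRs eta_i:", elrs)      # compare against 1/alpha_i thresholds
```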
A Training Regime
For the Spectral Mixture Kernel, we use 4 mixtures. The CNF component of our model is inspired by FFJORD. For NGGP, we use the same CNF component architecture as for the sines dataset; adding noise improves performance when learning with the CNF component. For this dataset, we tested the NGGP and DKT models with the RBF and Spectral kernels only.
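For concreteness, the two kernel choices above might be configured as follows. This is a minimal sketch assuming GPyTorch, which the text does not name; apart from the stated mixture count, all hyperparameters are left at library defaults.

```python
import gpytorch

# Minimal sketch of the two kernels mentioned above (GPyTorch is our assumed
# library; the actual implementation behind the text may differ).
rbf_kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

# Spectral Mixture Kernel with 4 mixtures, as stated in the text.
sm_kernel = gpytorch.kernels.SpectralMixtureKernel(num_mixtures=4)
```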
Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)
We study an average robustness notion for deep neural networks in (selected) wide and narrow, deep and shallow, as well as lazy and non-lazy training settings. We prove that width has a negative effect on robustness in the under-parameterized setting, while it improves robustness in the over-parameterized setting. The effect of depth depends closely on the initialization and the training mode. In particular, under LeCun initialization, depth helps robustness in the lazy training regime; in contrast, under Neural Tangent Kernel (NTK) and He initialization, depth hurts robustness. Moreover, under the non-lazy training regime, we demonstrate how the width of a two-layer ReLU network benefits robustness. Our theoretical developments improve the results of [Huang et al.
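To make the average-robustness notion tangible, here is a small Monte-Carlo sketch for a two-layer ReLU network at several widths. The measure $\mathbb{E}_{x,\delta}\,|f(x+\delta)-f(x)|$, the Gaussian perturbation model, and the initialization scales are our illustrative assumptions, not the paper's formal definitions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hedged illustration only: a two-layer ReLU network and a Monte-Carlo probe
# of one plausible average-robustness measure, E_{x,delta}|f(x+delta) - f(x)|.
# The paper's formal definition may differ; widths, initialization scales,
# and the perturbation model below are our assumptions.

def make_net(d, m, init="lecun"):
    # LeCun init: Var(W) = 1/fan_in; He init: Var(W) = 2/fan_in.
    scale = {"lecun": 1.0, "he": 2.0}[init]
    W = rng.normal(0.0, np.sqrt(scale / d), size=(m, d))  # hidden weights
    v = rng.normal(0.0, np.sqrt(scale / m), size=m)       # output weights
    return lambda x: np.maximum(x @ W.T, 0.0) @ v

def avg_sensitivity(f, d, eps=0.1, n_samples=2000):
    x = rng.normal(size=(n_samples, d))
    delta = eps * rng.normal(size=(n_samples, d)) / np.sqrt(d)
    return np.mean(np.abs(f(x + delta) - f(x)))

d = 20
for m in (10, 100, 1000):  # narrow to wide hidden layers
    f = make_net(d, m)
    print(f"width {m:5d}: E|f(x+delta)-f(x)| ~= {avg_sensitivity(f, d):.4f}")
```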